The reduced nearest neighbor rule (Corresp.)

Author

  • Geoffrey W. Gates
Abstract

Fig. 3 shows P*_{N,sym}(m) for various numbers of observations N and various sizes of memory m. The Chernoff bound on P*_N(∞) was used for N ≥ 32. Quite naturally, one does better with more memory. The P*_{N,sym}(m) curve for any given value of m follows the P*_N(∞) line for low values of N, diverges from it for larger values of N, and approaches a nonzero limit P*_∞(m) as N → ∞. This behavior is easily explained. Any given machine can "remember" all of the observations for low values of N; here infinite memory offers no advantage. For larger values of N, a finite-state machine necessarily loses some information and thus does not do as well as one with infinite memory. As N → ∞, P*_{N,sym}(m) approaches P*_∞(m), the infinite-time lower bound on the probability of error, since from [1] we know that for N = ∞ the optimal machine is symmetric.

Similar sources

Edge Detection Based On Nearest Neighbor Linear Cellular Automata Rules and Fuzzy Rule Based System

Edge detection is an important task for sharpening the boundaries of images in order to detect regions of interest. This paper applies linear cellular automata rules and a Mamdani fuzzy inference model for edge detection in both monochromatic and RGB images. In the uniform cellular automata, a transition matrix has been developed for edge detection. The results have been compared to the ...


The condensed nearest neighbor rule using the concept of mutual nearest neighborhood (Corresp.)

Consider a set of N objects, consisting of N_i objects originating from population A_i, i = 1, ..., k, so that N = Σ_{i=1}^{k} N_i. In order to classify the set of N objects, we measure a variable with pdf f_i for objects originating from population A_i. Let x_1, x_2, ..., x_N be the measured values of the N objects, which are assumed to be independent. Now consider the N-tuple G_j = (g_{1j}, g_{2j}, ...


An upper bound on prototype set size for condensed nearest neighbor

The condensed nearest neighbor (CNN) algorithm is a heuristic for reducing the number of prototypical points stored by a nearest neighbor classifier, while keeping the classification rule given by the reduced prototypical set consistent with the full set. I present an upper bound on the number of prototypical points accumulated by CNN. The bound originates in a bound on the number of times the ...
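The condensation heuristic described above (Hart's CNN) can be sketched in a few lines. This is an illustrative reconstruction under the usual formulation — Euclidean 1-NN with greedy absorption of misclassified points — not code from the paper, and the function name `cnn_condense` is my own:

```python
import numpy as np

def cnn_condense(X, y):
    """Hart's condensed nearest neighbor: greedily build a prototype
    subset (the "store") that classifies every training point
    correctly under the 1-NN rule."""
    store = [0]          # seed the store with the first sample
    changed = True
    while changed:       # sweep until a full pass absorbs nothing new
        changed = False
        for i in range(len(X)):
            if i in store:
                continue
            # 1-NN prediction for X[i] using only the current store
            d = np.linalg.norm(X[store] - X[i], axis=1)
            nearest = store[int(np.argmin(d))]
            if y[nearest] != y[i]:
                store.append(i)   # misclassified -> absorb into the store
                changed = True
    return np.array(store)
```

On a well-separated toy set the store typically retains only a few points near the class boundary while remaining consistent with the full training set, which is the property the bound discussed above concerns.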


Structural risk minimization using nearest neighbor rule

We present a novel nearest neighbor rule-based implementation of the structural risk minimization principle to address a generic classification problem. We propose a fast reference set thinning algorithm on the training data set similar to a support vector machine approach. We then show that the nearest neighbor rule based on the reduced set implements the structural risk minimization principle...


Nearest-Neighbor Classification Rule

In this slecture, the basic principles of implementing the nearest neighbor rule will be covered. The error associated with the nearest neighbor rule will be discussed in detail, including convergence, error rate, and error bound. Since the nearest neighbor rule relies on a metric function between patterns, the properties of metrics will be studied in detail. Examples of different metrics will be introduced with ...
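The decision rule summarized above is simple enough to state as a short sketch. This assumes Euclidean distance as the default metric (any function satisfying the metric properties — non-negativity, symmetry, triangle inequality — could be substituted), and the helper name `nn_classify` is hypothetical:

```python
import numpy as np

def nn_classify(x, X_train, y_train, metric=None):
    """Classify x with the label of its nearest training pattern.
    `metric` defaults to the Euclidean distance; any valid metric
    on the pattern space may be passed instead."""
    if metric is None:
        metric = lambda a, b: np.linalg.norm(a - b)
    dists = [metric(x, xt) for xt in X_train]
    return y_train[int(np.argmin(dists))]
```

The choice of metric directly determines which training pattern is "nearest", which is why the properties of metrics matter for the rule's behavior and error analysis.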


Journal:
  • IEEE Trans. Information Theory

Volume 18  Issue 

Pages  -

Publication date 1972